    Prompt Delay

    Delay games are two-player games of infinite duration in which one player may delay her moves to obtain a lookahead on her opponent's moves. Recently, such games with quantitative winning conditions in weak MSO with the unbounding quantifier were studied, but their properties turned out to be unsatisfactory. In particular, unbounded lookahead is in general necessary. Here, we study delay games with winning conditions given by Prompt-LTL, Linear Temporal Logic equipped with a parameterized eventually operator whose scope is bounded. Our main result shows that solving Prompt-LTL delay games is complete for triply-exponential time. Furthermore, we give tight triply-exponential bounds on the necessary lookahead and on the scope of the parameterized eventually operator. Thus, we identify Prompt-LTL as the first known class of well-behaved quantitative winning conditions for delay games. Finally, we show that applying our techniques to delay games with ω-regular winning conditions answers open questions in the cases where the winning conditions are given by non-deterministic, universal, or alternating automata.
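    The quantitative aspect can be made concrete as follows (a sketch in our own notation, not the paper's exact definition): satisfaction of the parameterized eventually operator is judged relative to an ω-word w, a position i, and a bound k on its scope,
    \[
      (w, i, k) \models \mathbf{F}_{\mathrm{P}}\,\varphi
      \;\iff\;
      \exists j .\; i \le j \le i + k \,\wedge\, (w, j, k) \models \varphi ,
      \qquad
      w \models \varphi \;\iff\; \exists k \in \mathbb{N} .\; (w, 0, k) \models \varphi .
    \]
    The bound k thus limits how long each prompt eventuality may be postponed; the triply-exponential bound on the "scope" above refers to this k.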

    How Much Lookahead is Needed to Win Infinite Games?

    Delay games are two-player games of infinite duration in which one player may delay her moves to obtain a lookahead on her opponent's moves. For ω-regular winning conditions it is known that such games can be solved in doubly-exponential time and that doubly-exponential lookahead is sufficient. We improve upon both results by giving an exponential time algorithm and an exponential upper bound on the necessary lookahead. This is complemented by showing EXPTIME-hardness of the solution problem and tight exponential lower bounds on the lookahead. Both lower bounds already hold for safety conditions. Furthermore, solving delay games with reachability conditions is shown to be PSPACE-complete. This is a corrected version of the paper https://arxiv.org/abs/1412.3701v4 published originally on August 26, 2016.
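    One common formalization of such games (our notation, sketched here for orientation rather than taken verbatim from the paper): a delay game with delay function f: ℕ → ℕ \ {0} and winning condition L ⊆ (Σ_I × Σ_O)^ω is played in rounds,
    \[
      \text{round } i: \quad \text{Player } I \text{ picks } u_i \in \Sigma_I^{\,f(i)}, \quad
      \text{then Player } O \text{ picks } v_i \in \Sigma_O ,
    \]
    \[
      \text{Player } O \text{ wins} \;\iff\;
      \binom{\alpha(0)}{\beta(0)}\binom{\alpha(1)}{\beta(1)}\cdots \in L
      \quad \text{for } \alpha = u_0 u_1 u_2 \cdots \text{ and } \beta = v_0 v_1 v_2 \cdots .
    \]
    Bounded lookahead then corresponds to constant delay functions, f(i) = 1 for all i > 0, so that Player O's advantage is fixed by the single block of f(0) letters granted up front.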

    Measuring consumption smoothing in CEX data

    This paper proposes and implements a new method of measuring the degree of consumption smoothing using data from the Consumer Expenditure Survey. The structure of this Survey is such that estimators previously used in the literature are inconsistent, simply because income is measured annually and consumption is measured quarterly. We impose an AR(1) structure on the income process to obtain a proxy for quarterly income through a projection on annual income. By construction, this proxy gives rise to a measurement error which is orthogonal to the proxy itself - as opposed to the unobserved regressor - leading to a consistent estimator. We contrast our estimates with the output of two estimators used in the literature. We show that while the first (OLS) estimator tends to overstate the degree of risk sharing, the second (IV) estimator grossly understates it.
    Keywords: risk sharing, consumption smoothing, income risk, projection
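    The consistency argument rests on the orthogonality property of linear projections; a simplified sketch in our own notation (not the paper's model): suppose y = βx* + ε with the true quarterly income measure x* unobserved and ε uncorrelated with income, and let x̂ be the linear projection of x* on annual income, so that x* = x̂ + e with Cov(x̂, e) = 0 by construction. Regressing y on the proxy x̂ then recovers β,
    \[
      \operatorname{plim}\hat{\beta}
      = \frac{\operatorname{Cov}(\hat{x}, y)}{\operatorname{Var}(\hat{x})}
      = \beta\,\frac{\operatorname{Cov}(\hat{x}, \hat{x} + e)}{\operatorname{Var}(\hat{x})}
      = \beta ,
    \]
    whereas treating a noisy regressor x = x* + u with classical error u orthogonal to x* yields the familiar attenuation factor Var(x*)/(Var(x*) + Var(u)) < 1, which is one way to read why the OLS estimator overstates the degree of risk sharing.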

    Discovering Scholarly Orphans Using ORCID

    Archival efforts such as (C)LOCKSS and Portico are in place to ensure the longevity of traditional scholarly resources like journal articles. At the same time, researchers are depositing a broad variety of other scholarly artifacts into emerging online portals that are designed to support web-based scholarship. These web-native scholarly objects are largely neglected by current archival practices and hence become scholarly orphans. We therefore argue for a novel paradigm that is tailored towards archiving these scholarly orphans. We are investigating the feasibility of using Open Researcher and Contributor ID (ORCID) as a supporting infrastructure for the process of discovering web identities and scholarly orphans for active researchers. We analyze ORCID in terms of coverage of researchers, subjects, and location, and assess the richness of its profiles in terms of web identities and scholarly artifacts. We find that ORCID currently falls short in all considered aspects and hence can only be considered in conjunction with other discovery sources. However, ORCID is growing fast, so there is potential that it could achieve a satisfactory level of coverage and richness in the near future.
    Comment: 10 pages, 5 figures, 5 tables; accepted for publication at JCDL 201
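    A minimal sketch of how such a profile-richness probe could look against the public ORCID REST API (v3.0 endpoints; the JSON field names such as "group" and "researcher-urls" reflect our assumptions about the response layout and should be checked against the API documentation; this is not the authors' harvesting pipeline):

    # Sketch: probe one ORCID profile for scholarly artifacts (works) and web
    # identities (researcher URLs) via the public ORCID API.
    # Assumptions: public v3.0 endpoints /works and /person, and JSON fields
    # "group", "researcher-urls", "researcher-url", and "url"/"value".
    import requests

    ORCID_API = "https://pub.orcid.org/v3.0"
    HEADERS = {"Accept": "application/json"}

    def profile_richness(orcid_id: str) -> dict:
        """Return rough counts of works and web identities for one ORCID iD."""
        works = requests.get(f"{ORCID_API}/{orcid_id}/works", headers=HEADERS).json()
        person = requests.get(f"{ORCID_API}/{orcid_id}/person", headers=HEADERS).json()

        n_works = len(works.get("group", []))
        urls = (person.get("researcher-urls") or {}).get("researcher-url", [])
        web_identities = [u.get("url", {}).get("value") for u in urls]

        return {"orcid": orcid_id, "works": n_works, "web_identities": web_identities}

    if __name__ == "__main__":
        # Example iD taken from ORCID's documentation; replace with a real profile.
        print(profile_richness("0000-0002-1825-0097"))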

    On the Number of Membranes in Unary P Systems

    We consider P systems with a linear membrane structure working on objects over a unary alphabet using sets of rules resembling homomorphisms. Such a restricted variant of P systems allows for a unique minimal representation of the generated unary language and thereby for an effective solution of the equivalence problem. Moreover, we examine the descriptional complexity of unary P systems with respect to the number of membranes.

    Extending Sitemaps for ResourceSync

    The documents used in the ResourceSync synchronization framework are based on the widely adopted document format defined by the Sitemap protocol. In order to address requirements of the framework, extensions to the Sitemap format were necessary. This short paper describes the concerns we had about introducing such extensions, the tests we did to evaluate their validity, and the aspects of the framework that address them.
    Comment: 4 pages, 6 listings; accepted at JCDL 201
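    For illustration, a sketch of how such an extended Sitemap (here, a ResourceSync Resource List) could be generated; the rs: namespace URI, the capability attribute, and the per-resource rs:md fixity element follow our reading of the ResourceSync specification and should be verified against the published framework documents:

    # Sketch: build a minimal ResourceSync Resource List, i.e. a Sitemap <urlset>
    # that carries extension elements from the assumed ResourceSync namespace
    # (prefix rs:). Verify element and attribute names against the spec before use.
    import xml.etree.ElementTree as ET

    SM_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    RS_NS = "http://www.openarchives.org/rs/terms/"
    ET.register_namespace("", SM_NS)
    ET.register_namespace("rs", RS_NS)

    def resource_list(resources):
        """resources: iterable of (uri, lastmod, md5) tuples."""
        urlset = ET.Element(f"{{{SM_NS}}}urlset")
        # Document-level metadata declaring the capability of this document.
        ET.SubElement(urlset, f"{{{RS_NS}}}md", capability="resourcelist")
        for uri, lastmod, md5 in resources:
            url = ET.SubElement(urlset, f"{{{SM_NS}}}url")
            ET.SubElement(url, f"{{{SM_NS}}}loc").text = uri
            ET.SubElement(url, f"{{{SM_NS}}}lastmod").text = lastmod
            # Per-resource metadata extension carrying fixity information.
            ET.SubElement(url, f"{{{RS_NS}}}md", hash=f"md5:{md5}")
        return ET.tostring(urlset, encoding="unicode")

    if __name__ == "__main__":
        print(resource_list([
            ("http://example.com/res1", "2013-01-02T13:00:00Z",
             "1584abdf8ebdc9802ac0c6a7402c03b6"),
        ]))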

    Nucleus: A Pilot Project

    Early in 2016, an environmental scan was conducted by the Research Library Data Working Group for three purposes: (1) perform a survey of the data management landscape at Los Alamos National Laboratory in order to identify local gaps in data management services; (2) conduct an environmental scan of external institutions to benchmark budgets, infrastructure, and personnel dedicated to data management; and (3) draft a research data infrastructure model that aligns with the current workflow and classification restrictions at Los Alamos National Laboratory. This report is a summary of those activities and the draft for a pilot data management project.
    Comment: 13 pages, report